Humans constantly interact with objects in daily life. Capturing such interactions and subsequently conducting visual inference from a fixed viewpoint suffers from occlusions, shape and texture ambiguities, and motion. To mitigate this problem, it is essential to build a training dataset that captures free-viewpoint interactions. We construct a dense multi-view dome to acquire a complex human-object interaction dataset, named HODome, that consists of $\sim$75M frames of 10 subjects interacting with 23 objects. To process the HODome dataset, we develop NeuralDome, a layer-wise neural processing pipeline tailored for multi-view video inputs that conducts accurate tracking, geometry reconstruction, and free-viewpoint rendering for both human subjects and objects. Extensive experiments on the HODome dataset demonstrate the effectiveness of NeuralDome on a variety of inference, modeling, and rendering tasks. Both the dataset and the NeuralDome tools will be disseminated to the community for further development.
Depth estimation is usually ill-posed and ambiguous for monocular camera-based 3D multi-person pose estimation. Since LiDAR can capture accurate depth information in long-range scenes, it can benefit both the global localization of individuals and 3D pose estimation by providing rich geometric features. Motivated by this, we propose a monocular camera and single LiDAR-based method for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and insensitive to lighting conditions. Specifically, we design an effective fusion strategy to take advantage of multi-modal input data, including images and point clouds, and make full use of temporal information to guide the network to learn natural and coherent human motions. Without relying on any 3D pose annotations, our method exploits the inherent geometric constraints of point clouds for self-supervision and utilizes 2D keypoints on images for weak supervision. Extensive experiments on public datasets and our newly collected dataset demonstrate the superiority and generalization capability of our proposed method.
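The abstract describes the camera-LiDAR fusion only at a high level. A common generic pattern for this kind of fusion, shown below purely as an illustration (the function name, shapes, and coordinate assumptions are ours, not the paper's architecture), is to project each LiDAR point into the image plane, sample the image feature at that pixel, and concatenate it with the per-point feature:

```python
import torch

def fuse_point_image_features(points, point_feats, img_feats, K):
    """Generic camera-LiDAR fusion: project points into the image and
    concatenate sampled image features with per-point features.

    points:      (N, 3) LiDAR points in camera coordinates (assumed).
    point_feats: (N, Cp) per-point features from a point-cloud backbone.
    img_feats:   (Ci, H, W) feature map from an image backbone.
    K:           (3, 3) camera intrinsics.
    """
    Ci, H, W = img_feats.shape
    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uv = (K @ points.T).T                              # (N, 3)
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)        # (N, 2) pixel coords
    # Normalize to [-1, 1] for grid_sample.
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
    grid = grid.view(1, 1, -1, 2)                      # (1, 1, N, 2)
    sampled = torch.nn.functional.grid_sample(
        img_feats[None], grid, align_corners=True)     # (1, Ci, 1, N)
    sampled = sampled.view(Ci, -1).T                   # (N, Ci)
    return torch.cat([point_feats, sampled], dim=-1)   # (N, Cp + Ci)
```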
Lifelong person re-identification (LReID) is in significant demand for real-world deployment, since large amounts of ReID data are captured from diverse locations over time and inherently cannot be accessed all at once. A key challenge for LReID is how to incrementally preserve old knowledge while gradually adding new capabilities to the system. Unlike most existing LReID methods, which mainly focus on mitigating catastrophic forgetting, we target a more challenging problem: not only reducing forgetting on old tasks, but also improving model performance on both new and old tasks during the lifelong learning process. Inspired by the biological process of human cognition, in which the somatosensory neocortex and the hippocampus work together in memory consolidation, we formulate a model called Knowledge Refreshing and Consolidation (KRC) that achieves both positive forward and backward transfer. More specifically, a knowledge refreshing scheme is incorporated with the knowledge rehearsal mechanism to enable bi-directional knowledge transfer by introducing a dynamic memory model and an adaptive working model. Moreover, a knowledge consolidation scheme operating on the dual space further improves model stability over the long term. Extensive evaluations show KRC's superiority over state-of-the-art LReID methods on challenging pedestrian benchmarks.
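How the refreshing and rehearsal mechanisms interact is not spelled out in the abstract. The sketch below shows one generic way a dynamic "memory model" and an adaptive "working model" could exchange knowledge in both directions (replay plus distillation, with a slowly refreshed memory copy); it is a hedged illustration, not the published KRC algorithm, and `memory_buffer.sample` is a hypothetical API:

```python
import torch
import torch.nn.functional as F

def rehearsal_refresh_step(working_model, memory_model, new_batch,
                           memory_buffer, optimizer,
                           ema_decay=0.999, distill_w=1.0):
    """One step of a generic rehearsal + refresh scheme (illustrative only).

    - Rehearsal: replay stored old-task samples alongside new data.
    - Refresh: the memory model is slowly updated (EMA) from the working
      model, so old knowledge is consolidated rather than frozen.
    """
    x_new, y_new = new_batch
    x_old, y_old = memory_buffer.sample(len(y_new))   # hypothetical buffer API
    x = torch.cat([x_new, x_old])
    y = torch.cat([y_new, y_old])

    logits = working_model(x)
    loss = F.cross_entropy(logits, y)
    with torch.no_grad():
        old_logits = memory_model(x)
    # Backward transfer: keep the working model close to consolidated knowledge.
    loss = loss + distill_w * F.kl_div(
        F.log_softmax(logits, -1), F.softmax(old_logits, -1),
        reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Forward transfer / refresh: consolidate new knowledge into the memory model.
    with torch.no_grad():
        for p_m, p_w in zip(memory_model.parameters(),
                            working_model.parameters()):
            p_m.mul_(ema_decay).add_(p_w, alpha=1 - ema_decay)
    return loss.item()
```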
In recent years, there has been growing interest in 3D face modeling owing to its broad applications in digital humans, character generation, and animation. Existing methods overwhelmingly emphasize modeling the external shape, texture, and skin properties of the face, while ignoring the inherent correlation between the internal skeletal structure and the external appearance. In this paper, we present SCULPTOR, 3D face creation with skeletal consistency using a learned parametric facial generator, which aims to easily create anatomically correct and visually convincing face models via a hybrid parametric-morphological representation. At the core of SCULPTOR is LUCY, the first large-scale shape-skeleton face dataset, built in collaboration with plastic surgeons. Named after one of the oldest known human ancestors, our LUCY dataset contains high-quality computed tomography (CT) scans of full human heads before and after orthognathic surgery, which are critical for evaluating surgical outcomes. LUCY consists of 144 scans of 72 subjects (31 male and 41 female), where each subject has two CT scans taken pre- and post-surgery. Based on our LUCY dataset, we learn SCULPTOR, a novel skeleton-consistent parametric facial generator that can create unique and nuanced facial features that help define a character while remaining physiologically sound. SCULPTOR jointly models the skull, face geometry, and facial appearance under a unified data-driven framework by decomposing the 3D face into shape blend shapes, pose blend shapes, and facial expression blend shapes. Compared with existing methods, SCULPTOR preserves both anatomical correctness and visual realism in face generation tasks. Finally, we demonstrate the robustness and effectiveness of SCULPTOR in various previously unseen, fancy applications.
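The decomposition into shape, pose, and expression blend shapes suggests an additive template model in the SMPL/FLAME family. Purely as a hedged sketch (the notation below follows that family's conventions and is not SCULPTOR's published formulation), the rest-pose geometry could be written as

$$T(\beta, \theta, \psi) = \bar{T} + B_S(\beta;\,\mathcal{S}) + B_P(\theta;\,\mathcal{P}) + B_E(\psi;\,\mathcal{E}),$$

where $\bar{T}$ is a mean template jointly covering the skull and facial surface, and $B_S$, $B_P$, $B_E$ are blend-shape functions of the identity parameters $\beta$, pose $\theta$, and expression $\psi$, with learned bases $\mathcal{S}$, $\mathcal{P}$, $\mathcal{E}$.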
Inter-person occlusion and depth ambiguity make estimating the 3D poses of multiple persons in camera-centric coordinates from a monocular view a challenging problem. Typical top-down frameworks suffer from high computational redundancy due to an additional detection stage. By contrast, bottom-up methods have lower computational costs, as they are less affected by the number of people. However, most existing bottom-up methods treat camera-centric 3D human pose estimation as two unrelated subtasks: 2.5D pose estimation and camera-centric depth estimation. In this paper, we propose a unified model that leverages the mutual benefits of both subtasks. Within the framework, a robust structured 2.5D pose estimation is designed to recognize inter-person occlusion based on depth relationships. Additionally, we develop an end-to-end geometry-aware depth reasoning method that exploits the mutual benefits of the 2.5D pose and camera-centric root depth. This method first uses the 2.5D pose and geometric information to infer the camera-centric root depth in a forward pass, and then exploits the root depth to further improve the representation for 2.5D pose estimation in a backward pass. Furthermore, we design an adaptive fusion scheme that leverages both visual perception and body geometry to alleviate the inherent depth ambiguity problem. Extensive experiments demonstrate the superiority of our proposed model over a wide range of bottom-up methods; our accuracy is even competitive with top-down counterparts. Notably, our model runs much faster than existing bottom-up and top-down methods.
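The "body geometry" cue for camera-centric root depth can be illustrated with elementary pinhole reasoning (a hedged sketch of the idea, not necessarily the paper's exact formulation): for a limb of metric length $L_{3D}$ whose projection spans $\ell_{2D}$ pixels under focal length $f$,

$$Z \approx f \cdot \frac{L_{3D}}{\ell_{2D}},$$

so each bone of the estimated 2.5D pose yields a coarse depth estimate. A forward pass can aggregate such geometric estimates with visual features, before the backward pass conditions the 2.5D representation on the inferred root depth.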
Topic evolution modeling has received significant attention in recent decades. Although various topic evolution models have been proposed, most studies focus on a single document corpus. In practice, however, we can easily access data from multiple sources and also observe relationships among them. It is therefore of interest to identify the relationship between multiple text corpora and further exploit this relationship to improve topic modeling. In this work, we focus on a special type of relationship between two text corpora, which we define as the "lead-lag relationship." This relationship characterizes the phenomenon that one text corpus influences the topics discussed in the other corpus in the future. To discover the lead-lag relationship, we propose a jointly dynamic topic model and develop an embedding extension to address the modeling problem of large-scale text corpora. With the recognized lead-lag relationship, the similarities between the two text corpora can be characterized, and the quality of the topics learned in both corpora can be improved. We numerically investigate the performance of the jointly dynamic topic modeling approach using synthetic data. Finally, we apply the proposed model to two text corpora consisting of statistics papers and graduate theses. The results show that the proposed model can well recognize the lead-lag relationship between the two corpora and also discover their specific and shared topic patterns.
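One plausible way to formalize the lead-lag relationship inside a dynamic topic model (a sketch of the idea only; the paper's exact specification may differ) is a state-space evolution in which the lagging corpus's topic trend depends on the leading corpus's trend $\ell$ steps earlier:

$$\eta_t^{B} = \eta_{t-1}^{B} + \gamma\,\eta_{t-\ell}^{A} + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, \sigma^2 I),$$

where $\eta_t^{A}$ and $\eta_t^{B}$ are the (e.g., logistic-normal) topic trajectories of the two corpora at time $t$, $\ell$ is the lag, and $\gamma$ measures the strength of the lead-lag effect; $\ell$ and $\gamma$ would be estimated jointly with the topics.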
Transformers have shown superior performance on many vision tasks. However, for the person re-identification (ReID) task, vanilla transformers leave the rich context of high-order feature relations under-exploited and lose local feature details, which is insufficient given the dramatic appearance variations of pedestrians. In this work, we propose an Omni-Relational High-Order Transformer (OH-Former) to model omni-relational features for ReID. First, to strengthen the capacity of the visual representation, instead of obtaining the attention matrix from pairs of queries and isolated keys at each spatial location, we go a step further and model the high-order statistics of the attention. We share the attention weights in the corresponding layer of each order with a prior mixing mechanism to reduce the computational cost. Then, a convolution-based local relation perception module is proposed to extract local relations and 2D positional information. The experimental results of our model are superior and promising, showing state-of-the-art performance on the Market-1501, DukeMTMC, MSMT17, and Occluded-Duke datasets.
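One generic way to read "high-order statistics of the attention" is as mixing powers of the pairwise attention matrix, so that $k$-th-order terms capture $k$-hop relations between tokens, with a learned prior over orders shared across the corresponding layers. The toy sketch below illustrates that reading only; it is not the published OH-Former layer:

```python
import torch
import torch.nn.functional as F

def high_order_attention(q, k, v, order_weights):
    """Toy high-order attention: mix powers of the attention matrix.

    q, k, v:       (N, d) token features for one head.
    order_weights: (K,) learned mixing prior over orders 1..K.
    """
    d = q.shape[-1]
    A = F.softmax(q @ k.T / d ** 0.5, dim=-1)  # 1st-order (pairwise) relations
    mix = torch.zeros_like(A)
    A_pow = A
    for w in F.softmax(order_weights, dim=0):
        mix = mix + w * A_pow                  # add the k-hop relation matrix
        A_pow = A_pow @ A                      # next order: one more hop
    return mix @ v
```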
Biomedical question answering aims to obtain answers to a given question from the biomedical domain. Due to its high demand for biomedical domain knowledge, it is difficult for models to learn this domain knowledge from limited training data. We propose a contextual embedding method that combines the open-domain QA model AoA and the BioBERT model pre-trained on biomedical domain data. We adopt unsupervised pre-training on a large biomedical corpus and fine-tune on biomedical question answering datasets. Additionally, we employ an MLP-based model weighting layer to automatically exploit the advantages of the two models in order to provide the correct answer. The public dataset BioMRC, constructed from the PubMed corpus, is used to evaluate our method. Experimental results show that our model outperforms state-of-the-art systems by a large margin.
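The MLP-based weighting layer is described only briefly. A minimal sketch of how per-token answer scores from the two readers could be combined (the module name, shapes, and per-token gating are our assumptions, not the paper's exact layer) is:

```python
import torch
import torch.nn as nn

class AnswerWeighting(nn.Module):
    """Toy gating layer: learn how much to trust each reader per token.

    Combines candidate-answer scores from an open-domain reader (e.g. AoA)
    and a domain reader (e.g. BioBERT). Illustrative sketch only.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, scores_open, scores_bio):
        # scores_*: (batch, seq_len) answer scores from each model.
        feats = torch.stack([scores_open, scores_bio], dim=-1)  # (B, L, 2)
        g = self.mlp(feats).squeeze(-1)                         # (B, L) gate
        return g * scores_open + (1 - g) * scores_bio
```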
Fast and reliable connectivity is essential to enhancing situational awareness and operational efficiency for public safety mission-critical (MC) users. In emergency or disaster circumstances, where existing cellular network coverage and capacity may not be able to meet MC communication demands, deployable-network-based solutions such as cells-on-wheels/wings can be utilized swiftly to ensure reliable connectivity for MC users. In this paper, we consider a scenario in which a macro base station (BS) is destroyed by a natural disaster and an unmanned aerial vehicle carrying a BS (UAV-BS) is set up to provide temporary coverage for users in the disaster area. The UAV-BS is integrated into the mobile network using 5G integrated access and backhaul (IAB) technology. We propose a framework and signalling procedure for applying machine learning to this use case. A deep reinforcement learning algorithm is designed to jointly optimize the access and backhaul antenna tilts as well as the three-dimensional location of the UAV-BS, in order to best serve the on-ground MC users while maintaining a good backhaul connection. Our results show that the proposed algorithm can autonomously navigate and configure the UAV-BS to improve throughput and reduce the drop rate of MC users.
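As a hedged illustration of the joint optimization (the state, action, and reward definitions below are plausible assumptions, not the paper's design), the agent can act on small discrete adjustments of the UAV-BS position and both antenna tilts, with a reward that trades MC-user service against backhaul quality:

```python
from dataclasses import dataclass
import itertools

@dataclass
class UavBsState:
    x: float            # 3D position of the UAV-BS
    y: float
    z: float
    access_tilt: float  # access antenna tilt (degrees)
    backhaul_tilt: float  # backhaul antenna tilt (degrees)

# Joint action space: small discrete steps on every controlled variable,
# so position and tilts are optimized together rather than separately.
STEPS = [-1.0, 0.0, 1.0]
ACTIONS = list(itertools.product(STEPS, STEPS, STEPS, STEPS, STEPS))

def reward(mc_user_throughput, drop_rate, backhaul_snr, min_snr=0.0):
    """Serve ground MC users while keeping the backhaul alive (assumed form)."""
    if backhaul_snr < min_snr:   # losing the backhaul is heavily penalized
        return -100.0
    return mc_user_throughput - 10.0 * drop_rate
```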
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
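A sketch of the overall token flow consistent with the abstract is given below; the module names and head dimensions are assumptions, and the released code at the linked repository is authoritative:

```python
import torch
import torch.nn as nn

class CmtStyleDetector(nn.Module):
    """Sketch of a CMT-style detector: image + point tokens, no explicit
    view transformation, shared 3D position encoding, query-based decoding."""
    def __init__(self, dim=256, num_queries=900, num_classes=10):
        super().__init__()
        self.img_pos = nn.Linear(3, dim)   # 3D points encoded into image tokens
        self.pts_pos = nn.Linear(3, dim)   # 3D position encoding for point tokens
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.cls_head = nn.Linear(dim, num_classes)
        self.box_head = nn.Linear(dim, 10)  # center/size/yaw/velocity (assumed)

    def forward(self, img_tokens, img_pts3d, pts_tokens, pts_xyz):
        # Implicit spatial alignment: both modalities share a 3D position code.
        tokens = torch.cat([img_tokens + self.img_pos(img_pts3d),
                            pts_tokens + self.pts_pos(pts_xyz)], dim=1)
        q = self.queries.weight[None].expand(tokens.shape[0], -1, -1)
        hs = self.decoder(q, tokens)        # queries cross-attend to fused tokens
        return self.cls_head(hs), self.box_head(hs)
```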